
    Selection of endurance capabilities and the trade-off between pressure and volume in the evolution of the human heart

    Chimpanzees and gorillas, when not inactive, engage primarily in short bursts of resistance physical activity (RPA), such as climbing and fighting, that create pressure stress on the cardiovascular system. In contrast, preindustrial human survival, first through hunting and gathering and later through farming, is thought to have depended on lifelong moderate-intensity endurance physical activity (EPA), which creates a cardiovascular volume stress. Although derived musculoskeletal and thermoregulatory adaptations for EPA in humans have been documented, it is unknown whether selection acted similarly on the heart. To test this hypothesis, we compared left ventricular (LV) structure and function across semiwild sanctuary chimpanzees, gorillas, and a sample of humans exposed to markedly different physical activity patterns. We show that the human LV possesses derived features that help augment cardiac output (CO), thereby enabling EPA. However, the human LV also demonstrates phenotypic plasticity and, hence, variability across a wide range of habitual physical activity. We show that the human LV’s propensity to remodel differentially in response to the chronic pressure or volume stimuli associated with intense RPA, EPA, and physical inactivity represents an evolutionary trade-off with potential implications for contemporary cardiovascular health. Specifically, the human LV trades off pressure adaptations for volume capabilities and converges on a chimpanzee-like phenotype in response to physical inactivity or sustained pressure loading. Consequently, the derived LV and lifelong low blood pressure (BP) appear to be partly sustained by regular moderate-intensity EPA, whose decline in postindustrial societies likely contributes to the modern epidemic of hypertensive heart disease.

    Association Between Anatomical Location of Surgically Induced Lesions and Postoperative Seizure Outcome in Temporal Lobe Epilepsy

    Background and Objectives: To determine the association between surgical lesions of distinct gray and white matter structures and connections and favorable postoperative seizure outcomes. Methods: Patients with drug-resistant temporal lobe epilepsy (TLE) from 3 epilepsy centers were included. We employed a voxel-based and connectome-based mapping approach to determine the association between favorable outcomes and surgery-induced temporal lesions. Analyses controlled for multiple confounders, including total surgical resection/ablation volume, hippocampal volumes, side of surgery, and the site where the patient was treated. Results: The cohort included 113 patients with TLE (54 women; 86 right-handed; mean age at seizure onset 16.5 years [SD 11.9]; 54.9% left-sided), of whom 61.1% were free of disabling seizures (Engel Class I) at follow-up. Postoperative seizure freedom in TLE was associated with (1) surgical lesions that targeted the hippocampus as well as the amygdala-piriform cortex complex and entorhinal cortices; (2) disconnection of temporal, frontal, and limbic regions through loss of white matter tracts within the uncinate fasciculus, anterior commissure, and fornix; and (3) functional disconnection of the frontal (superior and middle frontal gyri, orbitofrontal region) and temporal (superior and middle pole) lobes. Discussion: Better postoperative seizure freedom is associated with surgical lesions of specific structures and connections throughout the temporal lobes. These findings shed light on the key components of epileptogenic networks in TLE and constitute a promising source of new evidence for future improvements in surgical interventions. Classification of Evidence: This study provides Class II evidence that for patients with TLE, postoperative seizure freedom is associated with surgical lesions of specific temporal lobe structures and connections.
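
    The voxel-based mapping described above can be illustrated with a short sketch: for each voxel, test whether surgically induced lesion status is associated with seizure freedom while adjusting for the confounders named in the abstract, then correct across voxels for multiple comparisons. This is a minimal illustration under stated assumptions, not the authors' pipeline; the array names, the per-voxel logistic model, and the FDR correction are choices made for the example.

```python
# Minimal sketch of a voxel-based lesion-outcome analysis (illustrative only).
# Assumes `lesion_masks` is an (n_patients, n_voxels) binary array of
# surgically induced lesions, `seizure_free` is a 0/1 outcome vector, and
# `covariates` is an (n_patients, n_covariates) numerically coded array of the
# confounders named in the abstract (resection volume, hippocampal volume,
# side of surgery, treatment site).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests


def voxelwise_lesion_map(lesion_masks, seizure_free, covariates, min_lesioned=10):
    n_voxels = lesion_masks.shape[1]
    pvals = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = lesion_masks[:, v]
        if lesioned.sum() < min_lesioned:
            continue  # skip voxels lesioned in too few patients
        X = sm.add_constant(np.column_stack([lesioned, covariates]))
        try:
            fit = sm.Logit(seizure_free, X).fit(disp=0)
        except Exception:
            continue  # e.g. perfect separation at this voxel
        pvals[v] = fit.pvalues[1]  # p-value for lesion status at this voxel
    # Correct for multiple comparisons across tested voxels (FDR shown here;
    # the published analysis may use a different correction).
    tested = ~np.isnan(pvals)
    significant = np.zeros(n_voxels, dtype=bool)
    if tested.any():
        significant[tested] = multipletests(pvals[tested], alpha=0.05, method="fdr_bh")[0]
    return pvals, significant
```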

    The impact of surgical delay on resectability of colorectal cancer: An international prospective cohort study

    AIM: The SARS-CoV-2 pandemic has provided a unique opportunity to explore the impact of surgical delays on cancer resectability. This study aimed to compare resectability for colorectal cancer patients undergoing delayed versus non-delayed surgery. METHODS: This was an international prospective cohort study of consecutive colorectal cancer patients with a decision for curative surgery (January-April 2020). Surgical delay was defined as an operation taking place more than 4 weeks after the treatment decision in a patient who did not receive neoadjuvant therapy. A subgroup analysis explored the effects of delay in elective patients only, and a sensitivity analysis explored the impact of longer delays. The primary outcome was complete resection, defined as curative resection with an R0 margin. RESULTS: Overall, 5453 patients from 304 hospitals in 47 countries were included, of whom 6.6% (358/5453) did not receive their planned operation. Of the 4304 operated patients without neoadjuvant therapy, 40.5% (1744/4304) were delayed beyond 4 weeks. Delayed patients were more likely to be older, male, and more comorbid, to have a higher body mass index, and to have rectal cancer and early-stage disease. Delayed patients had higher unadjusted rates of complete resection (93.7% vs. 91.9%, P = 0.032) and lower rates of emergency surgery (4.5% vs. 22.5%, P < 0.001). After adjustment, delay was not associated with a lower rate of complete resection (OR 1.18, 95% CI 0.90-1.55, P = 0.224), a finding that was consistent in elective patients only (OR 0.94, 95% CI 0.69-1.27, P = 0.672). Longer delays were not associated with poorer outcomes. CONCLUSION: One in 15 colorectal cancer patients did not receive their planned operation during the first wave of COVID-19. Surgical delay did not appear to compromise resectability, raising the hypothesis that any reduction in long-term survival attributable to delays is likely to be due to micro-metastatic disease.
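
    As a rough illustration of the adjusted analysis reported in the RESULTS, a logistic regression of complete (R0) resection on delay status, with confounders as covariates, yields an adjusted odds ratio and 95% CI of the same form as the result quoted above. This is only a sketch: the dataset path and covariate names below are hypothetical stand-ins, not the study's actual variables or model specification.

```python
# Illustrative sketch of an adjusted logistic regression for complete (R0)
# resection, with surgical delay (>4 weeks, coded 0/1) as the exposure. The
# file name and covariate names are placeholders for the confounders implied
# by the abstract (age, sex, comorbidity, BMI, tumour site, stage).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("colorectal_cohort.csv")  # hypothetical analysis dataset

model = smf.logit(
    "complete_resection ~ delayed + age + C(sex) + C(comorbidity_grade)"
    " + bmi + C(rectal) + C(early_stage)",
    data=df,
).fit(disp=0)

# Express the exposure effect as an adjusted odds ratio with a 95% CI,
# the same form as the reported result (OR 1.18, 95% CI 0.90-1.55).
or_delay = np.exp(model.params["delayed"])
ci = np.exp(model.conf_int().loc["delayed"])
print(f"Adjusted OR for delay: {or_delay:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```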

    Illness Perceptions Predict Cognitive Performance Validity

    © 2018 The International Neuropsychological Society. Objectives: The aim of this study was to investigate the relationship of psychological variables to cognitive performance validity test (PVT) results in mixed forensic and nonforensic clinical samples. Methods: Participants included 183 adults who underwent comprehensive neuropsychological examination. Criterion groups (Credible and Noncredible) were formed based on performance on the Word Memory Test and other stand-alone and embedded PVT measures. Results: Multivariate logistic regression analysis identified three significant predictors of cognitive performance validity: two psychological constructs, Cogniphobia (the perception that cognitive effort will exacerbate neurological symptoms) and Symptom Identity (the perception that current symptoms are the result of illness or injury), and one contextual factor (forensic context). Although there was no interaction among these factors, elevated scores were most often observed in the forensic sample, suggesting that these independently contributing intrinsic psychological factors are more likely to occur in a forensic environment. Conclusions: Illness perceptions were significant predictors of cognitive performance validity, particularly when they reached very elevated levels. Extreme elevations were more common among participants in the forensic sample, and potential reasons for this pattern are explored.